OpenShift Bugs / OCPBUGS-30631

SNO (RT kernel): sosreport crashes the SNO node

    • Type: Bug
    • Resolution: Won't Do
    • Priority: Major
    • Affects Version: 4.12.z
    • Component: RHCOS
    • Severity: Important
      Description of problem:

      sosreport collection causes the SNO XR11 node to crash.

      Version-Release number of selected component (if applicable):

      - RHOCP    : 4.12.30
      - kernel   : 4.18.0-372.69.1.rt7.227.el8_6.x86_64
      - platform : x86_64

      How reproducible:

      sh-4.4# chrt -rr 99 toolbox
      .toolboxrc file detected, overriding defaults...
      Checking if there is a newer version of ocpdalmirror.xxx.yyy:8443/rhel8/support-tools-zzz-feb available...
      Container 'toolbox-root' already exists. Trying to start...
      (To remove the container and start with a fresh toolbox, run: sudo podman rm 'toolbox-root')
      toolbox-root
      Container started successfully. To exit, type 'exit'.
      [root@node /]# which sos
      /usr/sbin/sos
      logger: socket /dev/log: No such file or directory
      [root@node /]# taskset -c 29-31,61-63 sos report --batch -n networking,kernel,processor -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on
      
      sosreport (version 4.5.6)
      
      This command will collect diagnostic and configuration information from
      this Red Hat CoreOS system.
      
      An archive containing the collected information will be generated in
      /host/var/tmp/sos.c09e4f7z and may be provided to a Red Hat support
      representative.
      
      Any information provided to Red Hat will be treated in accordance with
      the published support policies at:
      
              Distribution Website : https://www.redhat.com/
              Commercial Support   : https://access.redhat.com/
      
      The generated archive may contain data considered sensitive and its
      content should be reviewed by the originating organization before being
      passed to any third party.
      
      No changes will be made to system configuration.
      
      
       Setting up archive ...
       Setting up plugins ...
      [plugin:auditd] Could not open conf file /etc/audit/auditd.conf: [Errno 2] No such file or directory: '/etc/audit/auditd.conf'
      caught exception in plugin method "system.setup()"
      writing traceback to sos_logs/system-plugin-errors.txt
      [plugin:systemd] skipped command 'resolvectl status': required services missing: systemd-resolved.
      [plugin:systemd] skipped command 'resolvectl statistics': required services missing: systemd-resolved.
       Running plugins. Please wait ...
      
        Starting 1/91  alternatives    [Running: alternatives]
        Starting 2/91  atomichost      [Running: alternatives atomichost]
        Starting 3/91  auditd          [Running: alternatives atomichost auditd]
        Starting 4/91  block           [Running: alternatives atomichost auditd block]
        Starting 5/91  boot            [Running: alternatives auditd block boot]
        Starting 6/91  cgroups         [Running: auditd block boot cgroups]
        Starting 7/91  chrony          [Running: auditd block cgroups chrony]
        Starting 8/91  cifs            [Running: auditd block cgroups cifs]
        Starting 9/91  conntrack       [Running: auditd block cgroups conntrack]
        Starting 10/91 console         [Running: block cgroups conntrack console]
        Starting 11/91 container_log   [Running: block cgroups conntrack container_log]
        Starting 12/91 containers_common [Running: block cgroups conntrack containers_common]
        Starting 13/91 crio            [Running: block cgroups conntrack crio]
        Starting 14/91 crypto          [Running: cgroups conntrack crio crypto]
        Starting 15/91 date            [Running: cgroups conntrack crio date]
        Starting 16/91 dbus            [Running: cgroups conntrack crio dbus]
        Starting 17/91 devicemapper    [Running: cgroups conntrack crio devicemapper]
        Starting 18/91 devices         [Running: cgroups conntrack crio devices]
        Starting 19/91 dracut          [Running: cgroups conntrack crio dracut]
        Starting 20/91 ebpf            [Running: cgroups conntrack crio ebpf]
        Starting 21/91 etcd            [Running: cgroups crio ebpf etcd]
        Starting 22/91 filesys         [Running: cgroups crio ebpf filesys]
        Starting 23/91 firewall_tables [Running: cgroups crio filesys firewall_tables]
        Starting 24/91 fwupd           [Running: cgroups crio filesys fwupd]
        Starting 25/91 gluster         [Running: cgroups crio filesys gluster]
        Starting 26/91 grub2           [Running: cgroups crio filesys grub2]
        Starting 27/91 gssproxy        [Running: cgroups crio grub2 gssproxy]
        Starting 28/91 hardware        [Running: cgroups crio grub2 hardware]
        Starting 29/91 host            [Running: cgroups crio hardware host]
        Starting 30/91 hts             [Running: cgroups crio hardware hts]
        Starting 31/91 i18n            [Running: cgroups crio hardware i18n]
        Starting 32/91 iscsi           [Running: cgroups crio hardware iscsi]
        Starting 33/91 jars            [Running: cgroups crio hardware jars]
        Starting 34/91 kdump           [Running: cgroups crio hardware kdump]
        Starting 35/91 kernelrt        [Running: cgroups crio hardware kernelrt]
        Starting 36/91 keyutils        [Running: cgroups crio hardware keyutils]
        Starting 37/91 krb5            [Running: cgroups crio hardware krb5]
        Starting 38/91 kvm             [Running: cgroups crio hardware kvm]
        Starting 39/91 ldap            [Running: cgroups crio kvm ldap]
        Starting 40/91 libraries       [Running: cgroups crio kvm libraries]
        Starting 41/91 libvirt         [Running: cgroups crio kvm libvirt]
        Starting 42/91 login           [Running: cgroups crio kvm login]
        Starting 43/91 logrotate       [Running: cgroups crio kvm logrotate]
        Starting 44/91 logs            [Running: cgroups crio kvm logs]
        Starting 45/91 lvm2            [Running: cgroups crio logs lvm2]
        Starting 46/91 md              [Running: cgroups crio logs md]
        Starting 47/91 memory          [Running: cgroups crio logs memory]
        Starting 48/91 microshift_ovn  [Running: cgroups crio logs microshift_ovn]
        Starting 49/91 multipath       [Running: cgroups crio logs multipath]
        Starting 50/91 networkmanager  [Running: cgroups crio logs networkmanager]
      
      Removing debug pod ...
      error: unable to delete the debug pod "ransno1ransnomavdallabcom-debug": Delete "https://api.ransno.mavdallab.com:6443/api/v1/namespaces/openshift-debug-mt82m/pods/ransno1ransnomavdallabcom-debug": dial tcp 10.71.136.144:6443: connect: connection refused
      

      Steps to Reproduce:

      Launch a debug pod, run the procedure above, and the node crashes.

      Actual results:

      Node crashes

      Expected results:

      Node does not crash

      Additional info:

      We have two vmcores attached to the associated SFDC ticket.
      This system uses an RT kernel.
      It uses an out-of-tree ice driver 1.13.7 (probably from 22 Dec 2023):
      
      [  103.681608] ice: module unloaded
      [  103.830535] ice: loading out-of-tree module taints kernel.
      [  103.831106] ice: module verification failed: signature and/or required key missing - tainting kernel
      [  103.841005] ice: Intel(R) Ethernet Connection E800 Series Linux Driver - version 1.13.7
      [  103.841017] ice: Copyright (C) 2018-2023 Intel Corporation
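
      The "OE" taint shown later in the stack traces matches these messages: O (out-of-tree module) and E (unsigned module). As a hedged sketch, a taint bitmask can be decoded by hand like this (the value 12288 is an assumed example producing exactly O+E, not a value read from this node; on a live system it would come from /proc/sys/kernel/tainted):

      ```shell
      # Decode a kernel taint bitmask into flag letters (bit order per kernel docs).
      # taint=12288 is an assumed example (bits 12 and 13 set => O and E);
      # on a live node you would use: taint=$(cat /proc/sys/kernel/tainted)
      taint=12288
      flags="P F S R M B U D A W C I O E L K X T"
      out=""
      bit=0
      for f in $flags; do
        if [ $(( (taint >> bit) & 1 )) -eq 1 ]; then
          out="$out$f "
        fi
        bit=$((bit + 1))
      done
      echo "${out% }"   # → O E
      ```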
      
      
      with the following kernel command line:
      
      Command line: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/vmlinuz-4.18.0-372.69.1.rt7.227.el8_6.x86_64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/f2c287e549b45a742b62e4f748bc2faae6ca907d24bb1e029e4985bc01649033/0 root=UUID=3e8bda80-5cf4-4c46-b139-4c84cb006354 rw rootflags=prjquota boot=UUID=1d0512c2-3f92-42c5-b26d-709ff9350b81 intel_iommu=on iommu=pt firmware_class.path=/var/lib/firmware skew_tick=1 nohz=on rcu_nocbs=3-31,35-63 tuned.non_isolcpus=00000007,00000007 systemd.cpu_affinity=0,1,2,32,33,34 intel_iommu=on iommu=pt isolcpus=managed_irq,3-31,35-63 nohz_full=3-31,35-63 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off rcutree.kthread_prio=11 default_hugepagesz=1G rcupdate.rcu_normal_after_boot=0 efi=runtime module_blacklist=irdma intel_pstate=passive intel_idle.max_cstate=0 crashkernel=256M
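
      Note that the taskset mask used for sos above (29-31,61-63) falls entirely inside the isolcpus/nohz_full set (3-31,35-63) on this command line, so the collection ran at RT priority on isolated CPUs. The overlap can be checked with a quick sketch (CPU lists copied from the logs above; the expand_cpus helper is hypothetical):

      ```shell
      # Expand a kernel-style CPU list ("3-31,35-63") into one CPU id per line.
      expand_cpus() {
        echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
          seq "$lo" "${hi:-$lo}"
        done
      }

      sos_cpus=$(expand_cpus "29-31,61-63")   # CPUs sos was pinned to
      isolated=$(expand_cpus "3-31,35-63")    # isolcpus/nohz_full set from the cmdline

      # Intersection: sos CPUs that are also isolated (comm needs both inputs sorted).
      comm -12 <(echo "$sos_cpus" | sort) <(echo "$isolated" | sort) | sort -n | xargs
      # → 29 30 31 61 62 63
      ```

      Every CPU sos was pinned to is isolated, which is consistent with the starvation symptoms in both dumps.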
      
      
      
      vmcore 1 shows an issue with the ice driver:

      $ crash vmcore tmp/vmlinux
      
      
            KERNEL: tmp/vmlinux  [TAINTED]
          DUMPFILE: vmcore  [PARTIAL DUMP]
              CPUS: 64
              DATE: Thu Mar  7 17:16:57 CET 2024
            UPTIME: 02:44:28
      LOAD AVERAGE: 24.97, 25.47, 25.46
             TASKS: 5324
          NODENAME: aaa.bbb.ccc
           RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
           VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
           MACHINE: x86_64  (1500 Mhz)
            MEMORY: 127.3 GB
             PANIC: "Kernel panic - not syncing:"
               PID: 693
           COMMAND: "khungtaskd"
              TASK: ff4d1890260d4000  [THREAD_INFO: ff4d1890260d4000]
               CPU: 0
             STATE: TASK_RUNNING (PANIC)
      
      crash> ps|grep sos                                                                                                                                                                                                                                                                                                           
        449071  363440  31  ff4d189005f68000  IN   0.2  506428 314484  sos                                                                                                                                                                                                                                                         
        451043  363440  63  ff4d188943a9c000  IN   0.2  506428 314484  sos                                                                                                                                                                                                                                                         
        494099  363440  29  ff4d187f941f4000  UN   0.2  506428 314484  sos     
      
      [ 8457.517696] ------------[ cut here ]------------
      [ 8457.517698] NETDEV WATCHDOG: ens3f1 (ice): transmit queue 35 timed out
      [ 8457.517711] WARNING: CPU: 33 PID: 349 at net/sched/sch_generic.c:472 dev_watchdog+0x270/0x300
      [ 8457.517718] Modules linked in: binfmt_misc macvlan pci_pf_stub iavf vfio_pci vfio_virqfd vfio_iommu_type1 vfio vhost_net vhost vhost_iotlb tap tun xt_addrtype nf_conntrack_netlink ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_nat xt_CT tcp_diag inet_diag ip6t_MASQUERADE xt_mark ice(OE) xt_conntrack ipt_MASQUERADE nft_counter xt_comment nft_compat veth nft_chain_nat nf_tables overlay bridge 8021q garp mrp stp llc nfnetlink_cttimeout nfnetlink openvswitch nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ext4 mbcache jbd2 intel_rapl_msr iTCO_wdt iTCO_vendor_support dell_smbios wmi_bmof dell_wmi_descriptor dcdbas kvm_intel kvm irqbypass intel_rapl_common i10nm_edac nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp rapl ipmi_ssif intel_cstate intel_uncore dm_thin_pool pcspkr isst_if_mbox_pci dm_persistent_data dm_bio_prison dm_bufio isst_if_mmio isst_if_common mei_me i2c_i801 joydev mei intel_pmt wmi acpi_ipmi ipmi_si acpi_power_meter sctp ip6_udp_tunnel
      [ 8457.517770]  udp_tunnel ip_tables xfs libcrc32c i40e sd_mod t10_pi sg bnxt_re ib_uverbs ib_core crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel bnxt_en ahci libahci libata dm_multipath dm_mirror dm_region_hash dm_log dm_mod ipmi_devintf ipmi_msghandler fuse [last unloaded: ice]
      [ 8457.517784] Red Hat flags: eBPF/rawtrace
      [ 8457.517787] CPU: 33 PID: 349 Comm: ktimers/33 Kdump: loaded Tainted: G           OE    --------- -  - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1
      [ 8457.517789] Hardware name: Dell Inc. PowerEdge XR11/0P2RNT, BIOS 1.12.1 09/13/2023
      [ 8457.517790] RIP: 0010:dev_watchdog+0x270/0x300
      [ 8457.517793] Code: 17 00 e9 f0 fe ff ff 4c 89 e7 c6 05 c6 03 34 01 01 e8 14 43 fa ff 89 d9 4c 89 e6 48 c7 c7 90 37 98 9a 48 89 c2 e8 1d be 88 ff <0f> 0b eb ad 65 8b 05 05 13 fb 65 89 c0 48 0f a3 05 1b ab 36 01 73
      [ 8457.517795] RSP: 0018:ff7aeb55c73c7d78 EFLAGS: 00010286
      [ 8457.517797] RAX: 0000000000000000 RBX: 0000000000000023 RCX: 0000000000000001
      [ 8457.517798] RDX: 0000000000000000 RSI: ffffffff9a908557 RDI: 00000000ffffffff
      [ 8457.517799] RBP: 0000000000000021 R08: ffffffff9ae6b3a0 R09: 00080000000000ff
      [ 8457.517800] R10: 000000006443a462 R11: 0000000000000036 R12: ff4d187f4d1f4000
      [ 8457.517801] R13: ff4d187f4d20df00 R14: ff4d187f4d1f44a0 R15: 0000000000000080
      [ 8457.517803] FS:  0000000000000000(0000) GS:ff4d18967a040000(0000) knlGS:0000000000000000
      [ 8457.517804] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 8457.517805] CR2: 00007fc47c649974 CR3: 00000019a441a005 CR4: 0000000000771ea0
      [ 8457.517806] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [ 8457.517807] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [ 8457.517808] PKRU: 55555554
      [ 8457.517810] Call Trace:
      [ 8457.517813]  ? test_ti_thread_flag.constprop.50+0x10/0x10
      [ 8457.517816]  ? test_ti_thread_flag.constprop.50+0x10/0x10
      [ 8457.517818]  call_timer_fn+0x32/0x1d0
      [ 8457.517822]  ? test_ti_thread_flag.constprop.50+0x10/0x10
      [ 8457.517825]  run_timer_softirq+0x1fc/0x640
      [ 8457.517828]  ? _raw_spin_unlock_irq+0x1d/0x60
      [ 8457.517833]  ? finish_task_switch+0xea/0x320
      [ 8457.517836]  ? __switch_to+0x10c/0x4d0
      [ 8457.517840]  __do_softirq+0xa5/0x33f
      [ 8457.517844]  run_timersd+0x61/0xb0
      [ 8457.517848]  smpboot_thread_fn+0x1c1/0x2b0
      [ 8457.517851]  ? smpboot_register_percpu_thread_cpumask+0x140/0x140
      [ 8457.517853]  kthread+0x151/0x170
      [ 8457.517856]  ? set_kthread_struct+0x50/0x50
      [ 8457.517858]  ret_from_fork+0x1f/0x40
      [ 8457.517861] ---[ end trace 0000000000000002 ]---
      [ 8458.520445] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0x99, HW_HEAD: 0x14, NTU: 0x15, INT: 0x0
      [ 8458.520451] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
      [ 8506.139246] ice 0000:8a:00.1: PTP reset successful
      [ 8506.437047] ice 0000:8a:00.1: VSI rebuilt. VSI index 0, type ICE_VSI_PF
      [ 8506.445482] ice 0000:8a:00.1: VSI rebuilt. VSI index 1, type ICE_VSI_CTRL
      [ 8540.459707] ice 0000:8a:00.1 ens3f1: tx_timeout: VSI_num: 14, Q 35, NTC: 0xe3, HW_HEAD: 0xe7, NTU: 0xe8, INT: 0x0
      [ 8540.459714] ice 0000:8a:00.1 ens3f1: tx_timeout recovery level 1, txqueue 35
      [ 8563.891356] ice 0000:8a:00.1: PTP reset successful
      
      The second vmcore on the same node shows an issue with the SSD drive:
      
      $ crash vmcore-2 tmp/vmlinux
      
            KERNEL: tmp/vmlinux  [TAINTED]
          DUMPFILE: vmcore-2  [PARTIAL DUMP]
              CPUS: 64
              DATE: Thu Mar  7 14:29:31 CET 2024
            UPTIME: 1 days, 07:19:52
      LOAD AVERAGE: 25.55, 26.42, 28.30
             TASKS: 5409
          NODENAME: aaa.bbb.ccc
           RELEASE: 4.18.0-372.69.1.rt7.227.el8_6.x86_64
           VERSION: #1 SMP PREEMPT_RT Fri Aug 4 00:21:46 EDT 2023
           MACHINE: x86_64  (1500 Mhz)
            MEMORY: 127.3 GB
             PANIC: "Kernel panic - not syncing:"
               PID: 696
           COMMAND: "khungtaskd"
              TASK: ff2b35ed48d30000  [THREAD_INFO: ff2b35ed48d30000]
               CPU: 34
             STATE: TASK_RUNNING (PANIC)
      
      crash> ps |grep sos
        719784  718369  62  ff2b35ff00830000  IN   0.4 1215636 563388  sos
        721740  718369  61  ff2b3605579f8000  IN   0.4 1215636 563388  sos
        721742  718369  63  ff2b35fa5eb9c000  IN   0.4 1215636 563388  sos
        721744  718369  30  ff2b3603367fc000  IN   0.4 1215636 563388  sos
        721746  718369  29  ff2b360557944000  IN   0.4 1215636 563388  sos
        743356  718369  62  ff2b36042c8e0000  IN   0.4 1215636 563388  sos
        743818  718369  29  ff2b35f6186d0000  IN   0.4 1215636 563388  sos
        748518  718369  61  ff2b3602cfb84000  IN   0.4 1215636 563388  sos
        748884  718369  62  ff2b360713418000  UN   0.4 1215636 563388  sos
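
      In both dumps the panic comes from khungtaskd hitting a sos task stuck in uninterruptible sleep (UN state, i.e. TASK_UNINTERRUPTIBLE): PID 494099 in the first vmcore and PID 748884 here. Picking those tasks out of a saved crash ps listing can be sketched as follows (sample lines inlined from the output above; the file name is hypothetical):

      ```shell
      # Sample lines saved from `crash> ps | grep sos` (abridged from the dump above).
      cat > /tmp/ps-sos.txt <<'EOF'
        721746  718369  29  ff2b360557944000  IN   0.4 1215636 563388  sos
        748518  718369  61  ff2b3602cfb84000  IN   0.4 1215636 563388  sos
        748884  718369  62  ff2b360713418000  UN   0.4 1215636 563388  sos
      EOF

      # Column 5 is the task state; UN marks uninterruptible sleep, the state
      # khungtaskd watches (the hung-task message later in this dmesg fires at 600 s).
      awk '$5 == "UN" {print $1}' /tmp/ps-sos.txt
      # → 748884
      ```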
      
      crash> dmesg
      
      [111871.309883] ata3.00: exception Emask 0x0 SAct 0x3ff8 SErr 0x0 action 0x6 frozen
      [111871.309889] ata3.00: failed command: WRITE FPDMA QUEUED
      [111871.309891] ata3.00: cmd 61/40:18:28:47:4b/00:00:00:00:00/40 tag 3 ncq dma 32768 out
                               res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
      [111871.309895] ata3.00: status: { DRDY }
      [111871.309897] ata3.00: failed command: WRITE FPDMA QUEUED
      [111871.309904] ata3.00: cmd 61/40:20:68:47:4b/00:00:00:00:00/40 tag 4 ncq dma 32768 out
                               res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
      [111871.309908] ata3.00: status: { DRDY }
      [111871.309909] ata3.00: failed command: WRITE FPDMA QUEUED
      [111871.309910] ata3.00: cmd 61/40:28:a8:47:4b/00:00:00:00:00/40 tag 5 ncq dma 32768 out
                               res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
      [111871.309913] ata3.00: status: { DRDY }
      [111871.309914] ata3.00: failed command: WRITE FPDMA QUEUED
      [111871.309915] ata3.00: cmd 61/40:30:e8:47:4b/00:00:00:00:00/40 tag 6 ncq dma 32768 out
                               res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
      [111871.309918] ata3.00: status: { DRDY }
      [111871.309919] ata3.00: failed command: WRITE FPDMA QUEUED
      [111871.309919] ata3.00: cmd 61/70:38:48:37:2b/00:00:1c:00:00/40 tag 7 ncq dma 57344 out
                               res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
      [111871.309922] ata3.00: status: { DRDY }
      [111871.309923] ata3.00: failed command: WRITE FPDMA QUEUED
      [111871.309924] ata3.00: cmd 61/20:40:78:29:0c/00:00:19:00:00/40 tag 8 ncq dma 16384 out
                               res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
      [111871.309927] ata3.00: status: { DRDY }
      [111871.309928] ata3.00: failed command: WRITE FPDMA QUEUED
      [111871.309929] ata3.00: cmd 61/08:48:08:0c:c0/00:00:1c:00:00/40 tag 9 ncq dma 4096 out
                               res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
      [111871.309932] ata3.00: status: { DRDY }
      [111871.309933] ata3.00: failed command: WRITE FPDMA QUEUED
      [111871.309934] ata3.00: cmd 61/40:50:28:48:4b/00:00:00:00:00/40 tag 10 ncq dma 32768 out
                               res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
      [111871.309937] ata3.00: status: { DRDY }
      [111871.309938] ata3.00: failed command: WRITE FPDMA QUEUED
      [111871.309939] ata3.00: cmd 61/40:58:68:48:4b/00:00:00:00:00/40 tag 11 ncq dma 32768 out
                               res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
      [111871.309942] ata3.00: status: { DRDY }
      [111871.309943] ata3.00: failed command: WRITE FPDMA QUEUED
      [111871.309944] ata3.00: cmd 61/40:60:a8:48:4b/00:00:00:00:00/40 tag 12 ncq dma 32768 out
                               res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
      [111871.309946] ata3.00: status: { DRDY }
      [111871.309947] ata3.00: failed command: WRITE FPDMA QUEUED
      [111871.309948] ata3.00: cmd 61/40:68:e8:48:4b/00:00:00:00:00/40 tag 13 ncq dma 32768 out
                               res 40/00:01:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
      [111871.309951] ata3.00: status: { DRDY }
      [111871.309953] ata3: hard resetting link
      ...
      ...
      ...
      [112789.787310] INFO: task sos:748884 blocked for more than 600 seconds.                                                                                                                                                                                                                                                     
      [112789.787314]       Tainted: G           OE    --------- -  - 4.18.0-372.69.1.rt7.227.el8_6.x86_64 #1                                                                                                                                                                                                                      
      [112789.787316] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.                                                                                                                                                                                                                                    
      [112789.787316] task:sos             state:D stack:    0 pid:748884 ppid:718369 flags:0x00084080                                                                                                                                                                                                                             
      [112789.787320] Call Trace:                                                                                                                                                                                                                                                                                                  
      [112789.787323]  __schedule+0x37b/0x8e0                                                                                                                                                                                                                                                                                      
      [112789.787330]  schedule+0x6c/0x120                                                                                                                                                                                                                                                                                         
      [112789.787333]  schedule_timeout+0x2b7/0x410                                                                                                                                                                                                                                                                                
      [112789.787336]  ? enqueue_entity+0x130/0x790                                                                                                                                                                                                                                                                                
      [112789.787340]  wait_for_completion+0x84/0xf0                                                                                                                                                                                                                                                                               
      [112789.787343]  flush_work+0x120/0x1d0                                                                                                                                                                                                                                                                                      
      [112789.787347]  ? flush_workqueue_prep_pwqs+0x130/0x130                                                                                                                                                                                                                                                                     
      [112789.787350]  schedule_on_each_cpu+0xa7/0xe0                                                                                                                                                                                                                                                                              
      [112789.787353]  vmstat_refresh+0x22/0xa0                                                                                                                                                                                                                                                                                    
      [112789.787357]  proc_sys_call_handler+0x174/0x1d0                                                                                                                                                                                                                                                                           
      [112789.787361]  vfs_read+0x91/0x150                                                                                                                                                                                                                                                                                         
      [112789.787364]  ksys_read+0x52/0xc0                                                                                                                                                                                                                                                                                         
      [112789.787366]  do_syscall_64+0x87/0x1b0                                                                                                                                                                                                                                                                                    
      [112789.787369]  entry_SYSCALL_64_after_hwframe+0x61/0xc6                                                                                                                                                                                                                                                                    
      [112789.787372] RIP: 0033:0x7f2dca8c2ab4                                                                                                                                                                                                                                                                                     
      [112789.787378] Code: Unable to access opcode bytes at RIP 0x7f2dca8c2a8a.                                                                                                                                                                                                                                                   
      [112789.787378] RSP: 002b:00007f2dbbffc5e0 EFLAGS: 00000246 ORIG_RAX: 0000000000000000                                                                                                                                                                                                                                       
      [112789.787380] RAX: ffffffffffffffda RBX: 0000000000000008 RCX: 00007f2dca8c2ab4                                                                                                                                                                                                                                            
      [112789.787382] RDX: 0000000000004000 RSI: 00007f2db402b5a0 RDI: 0000000000000008                                                                                                                                                                                                                                            
      [112789.787383] RBP: 00007f2db402b5a0 R08: 0000000000000000 R09: 00007f2dcace27bb                                                                                                                                                                                                                                            
      [112789.787383] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000004000                                                                                                                                                                                                                                            
      [112789.787384] R13: 0000000000000008 R14: 00007f2db402b5a0 R15: 00007f2da4001a90                                                                                                                                                                                                                                            
      [112789.787418] NMI backtrace for cpu 34    

            Javier Pena (jpena@redhat.com)
            Johann Peyrard (rhn-support-jpeyrard)
            Michael Nguyen