RHEL-2281

fuse-mounted volume misbehaves after qemu performs an operation on a file


    • Important
    • rhel-sst-rh-ceph-storage
    • ssg_rh_storage

      Description of problem:

      Hi guys.

      I do:
      -> $ qemu-img info /00-VMs/enc.ocp0node.qcow2
      then:
      -> $ ll /00-VMs/enc.ocp0node.qcow2
      ls: cannot access '/00-VMs/enc.ocp0node.qcow2': No such file or directory
      then:
      -> $ llr /00-VMs/
      and again:
      -> $ ll /00-VMs/enc.ocp0node.qcow2
      -rw-r--r--. 1 root root 1640628704 May 16 13:34 /00-VMs/enc.ocp0node.qcow2

      Thus, unless such a directory-listing (glob) operation is performed, the system/gluster keeps saying that the file does not exist.
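
      For reference, a minimal shell reproducer of the sequence above (a sketch: the image
      path and mount point are taken from this report, and the ll/llr aliases are assumed
      to expand to "ls -l" and "ls -lR"):

      #!/bin/bash
      # Sketch: stat of a qcow2 image on the FUSE mount fails with ENOENT right
      # after qemu-img touches it, until the directory is listed again.
      IMG=/00-VMs/enc.ocp0node.qcow2

      qemu-img info "$IMG"          # qemu opens and inspects the image
      ls -l "$IMG"                  # fails here: "No such file or directory"
      ls -lR /00-VMs/ > /dev/null   # re-listing the directory repopulates the entry
      ls -l "$IMG"                  # succeeds again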

      10.1.0.100:/VMs on /00-VMs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)
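
      The line above is the /proc/mounts entry for a plain GlusterFS FUSE mount; assuming
      the volume and mount point from this report, the equivalent mount command would be
      roughly:

      mount -t glusterfs 10.1.0.100:/VMs /00-VMs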

      -> $ gluster volume info

      Volume Name: VMs
      Type: Distributed-Replicate
      Volume ID: 14055448-6161-4b4a-a029-6303f5e60c0b
      Status: Started
      Snapshot Count: 0
      Number of Bricks: 1 x (2 + 1) = 3
      Transport-type: tcp
      Bricks:
      Brick1: 10.1.0.100:/devs/00.GLUSTERs/VMs
      Brick2: 10.1.0.101:/devs/00.GLUSTERs/VMs
      Brick3: 10.1.0.99:/devs/00.GLUSTERs/VMs-arbiter (arbiter)
      Options Reconfigured:
      auth.reject: 10.3.1.0/24
      auth.allow: 10.1.0.100,10.1.0.101,10.1.0.99
      performance.parallel-readdir: on
      performance.readdir-ahead: on
      performance.nl-cache-timeout: 600
      performance.nl-cache: on
      features.cache-invalidation-timeout: 600
      performance.stat-prefetch: on
      performance.cache-invalidation: on
      performance.client-io-threads: off
      transport.address-family: inet
      storage.fips-mode-rchecksum: on
      cluster.granular-entry-heal: on
      storage.owner-uid: 107
      storage.owner-gid: 107
      cluster.shd-max-threads: 3
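
      Several client-side caching options are enabled on this volume (performance.nl-cache,
      performance.stat-prefetch, performance.cache-invalidation). Not confirmed as the cause
      here, but as a diagnostic sketch they can be inspected and toggled per volume with the
      gluster CLI, e.g.:

      # show the current value of a single option
      gluster volume get VMs performance.nl-cache

      # temporarily disable the negative-lookup cache to check whether the ENOENT
      # behaviour changes, then restore the option's default afterwards
      gluster volume set VMs performance.nl-cache off
      gluster volume reset VMs performance.nl-cache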

      Version-Release number of selected component (if applicable):

      glusterfs-server-11.0-1.el9s.x86_64

      How reproducible:

      Steps to Reproduce:
      1. Run "qemu-img info" against a qcow2 image stored on the glusterfs FUSE mount.
      2. Run "ls -l" (ll) on the same file.
      3. List the mount point recursively (ls -lR /00-VMs/) and run "ls -l" on the file again.

      Actual results:

      The stat in step 2 fails with "No such file or directory"; the file only becomes
      visible again after the directory listing in step 3.

      Expected results:

      The file remains visible and stat-able immediately after the qemu-img operation.

      Additional info:

              sheggodu@redhat.com Sunil Kumar Heggodu Gopala Acharya
              lejeczek Paweł Eljasz (Inactive)