RHEL-7693

[RFE] Add new MD RAID resource agent (i.e. successor to upstream 'Raid1' agent) (RHEL9)

    • Priority: Medium
    • Team: rhel-sst-high-availability
    • Type: Enhancement

      +++ This bug was initially created as a clone of Bug #1810577 +++

      Description of problem:
      The upstream resource-agents package contains a 'Raid1' resource agent that is functionally limited with respect to supporting MD clustered RAID1/10. This is the feature bz to track the development of a pacemaker resource agent cloned from 'Raid1' and enhanced for clusters. For instance, 'Raid1' has no notion of whether an MD array is clustered or not, so it can neither properly reject creation of a resource on such an array nor adjust to it automatically.
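
      As a hedged illustration of that point, the sketch below shows one way an agent could distinguish a clustered MD array from a single-host one. The device path and the matched output line are assumptions for illustration (clustered arrays are created with 'mdadm --create ... --bitmap=clustered', and 'mdadm --detail' reports a cluster name for them); this is not the actual implementation of the new agent.

        #!/bin/sh
        # Sketch: refuse a cloned resource whose MD array is not clustered.
        # /dev/md0 and the "Cluster Name" match are illustrative assumptions.
        MDDEV=/dev/md0
        if mdadm --detail "$MDDEV" | grep -q "Cluster Name"; then
            echo "$MDDEV is clustered: a cloned resource is acceptable"
        else
            echo "$MDDEV is single-host: reject cloning the resource" >&2
            exit 1
        fi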

      Version-Release number of selected component (if applicable):
      All

      How reproducible:

      Steps to Reproduce:
      1.
      2.
      3.

      Actual results:

      Expected results:

      Additional info:
      See the bz dependency for initial use/test case examples (to be copied into and enhanced in this bz later)

      — Additional comment from Heinz Mauelshagen on 2020-03-13 17:31:32 CET —

      Including test cases from closed 1810561 here.

      Test cases:

      • active-passive (a pcs/mdadm sketch follows this list)
        • create MD resources for all levels (0/1/4/5/6/10) with force_clones=false
        • check they get started properly
        • put (single-host) filesystems (xfs, ext4, gfs2) on top and load them with I/O
        • define ordering constraints for the above
        • move them to another node manually
        • disable/enable them
        • add/remove legs to/from the raid1
        • fail/fence a node and check that the resources get started correctly
          on another online node
        • fail access to MD array leg(s) on the active node
          and analyse MD/resource/filesystem behaviour
        • reboot the whole cluster and check that the resources start properly
        • update the resource option force_clones to true
        • clone the resource (should fail on single-host MD arrays, but it doesn't)
        • test for data corruption (even with gfs2 on top)
        • remove all resources in/out of order and check for proper removal
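
      A hedged sketch of the active-passive steps above, using pcs and mdadm; the resource names, device paths and mdadm.conf location are illustrative assumptions, not a prescribed test script.

        # create a single-host RAID1 array and a Raid1 resource for it
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        pcs resource create md-raid1 ocf:heartbeat:Raid1 \
            raidconf=/etc/mdadm.conf raiddev=/dev/md0 force_clones=false

        # put a filesystem on top, colocated with and ordered after the array
        mkfs.xfs /dev/md0
        pcs resource create md-fs ocf:heartbeat:Filesystem \
            device=/dev/md0 directory=/mnt/md fstype=xfs
        pcs constraint colocation add md-fs with md-raid1
        pcs constraint order start md-raid1 then md-fs

        # move the stack to another node, then disable and re-enable it
        pcs resource move md-raid1
        pcs resource disable md-raid1
        pcs resource enable md-raid1

        # add a third leg to the raid1, then remove it again
        mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdc1
        mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
        mdadm --grow /dev/md0 --raid-devices=2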
      • active-active (a clustered-MD sketch follows this list)
        • create clustered MD resources for levels 1 and 10
        • check they get started properly
        • clone them
        • check they get started properly on all other online nodes
        • add/remove legs to/from the RAID1
        • try reshaping the RAID10
        • disable/enable them
        • fail a node and check that the resources get started correctly on another node
        • fail access to MD array leg(s) on any/multiple nodes
          and analyse MD/resource/filesystem behaviour
        • remove all resources in/out of order and check for proper removal
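
      A hedged sketch of the active-active steps above. It assumes a running pacemaker/corosync cluster with the dlm infrastructure required by clustered MD already in place; the resource names and device paths are illustrative.

        # create a RAID1 array with a clustered write-intent bitmap
        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
            --bitmap=clustered /dev/sda1 /dev/sdb1

        # create the resource and clone it so the array runs on all nodes
        pcs resource create cmd-raid1 ocf:heartbeat:Raid1 \
            raidconf=/etc/mdadm.conf raiddev=/dev/md0 force_clones=true
        pcs resource clone cmd-raid1

        # fail one leg and watch MD/resource recovery across the nodes
        mdadm /dev/md0 --fail /dev/sda1
        pcs status --full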

      — Additional comment from Oyvind Albrigtsen on 2020-04-15 16:44:14 CEST —

      https://github.com/ClusterLabs/resource-agents/pull/1481
