Red Hat OpenStack Services on OpenShift
OSPRH-8824: Manila tempest tests need to support config options required for VAST Data Driver

      https://review.opendev.org/c/openstack/manila-tempest-plugin/+/921419 adds new configuration options to run share shrink/extend scenario tests with the VAST driver. These tests will be run as part of the driver's certification with RHOSO 18.
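
      As a minimal sketch of what such a change looks like, the block below shows how tempest options of this kind are typically registered with oslo.config. The option names here (overflow_blocks, capacity_propagation_wait) are hypothetical stand-ins; the authoritative names and defaults are in the review linked above.

          # Sketch only: option names are illustrative, not the ones merged
          # in review 921419.
          from oslo_config import cfg

          share_opts = [
              cfg.IntOpt('overflow_blocks',
                         default=0,
                         help='Extra 64 MiB blocks a test tolerates past the '
                              'share size before expecting a capacity error; '
                              'back ends such as VAST enforce quotas '
                              'asynchronously.'),
              cfg.IntOpt('capacity_propagation_wait',
                         default=0,
                         help='Seconds to wait after an extend/shrink and '
                              'between writes so a changed capacity limit is '
                              'visible cluster-wide.'),
          ]

          def register_opts(conf):
              # manila-tempest-plugin keeps its options in the [share] group.
              conf.register_opts(share_opts, group='share')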

       

      The following IRC conversation sheds some light on the motivation.

      [02:15:33 PM] <fnn45> Hi gouthamr. ... We have some limitations in our system and we would like to discuss them. I prepared an MR which can help: https://review.opendev.org/c/openstack/manila-tempest-plugin/+/921419.
      [02:18:15 PM] <gouthamr> are all these related to the same limitation?
      [02:18:41 PM] <gouthamr> all these = the three settings you're tuning with config opts
      [02:21:39 PM] <fnn45> Yes. VAST storage has an async nature with regard to propagating capacity limits. We never bothered about this because customers usually don't care if they're able to write, let's say, 0.5-1 GB more than the allowed limit. But here we faced issues
      [02:27:16 PM] <gouthamr> yes; that was going to be my next question
      [02:27:26 PM] <gouthamr> how big of a skew is it
      [02:27:48 PM] <gouthamr> if its a small/negligible percentage over the quota assigned to the user, its okay. these scenario tests are expected to enforce expectations of consistency.. i have a couple of further questions, that i can take to the patch
      [02:30:09 PM] <gouthamr> and your responses there will allow us to engage other reviewers
      [02:33:39 PM] <fnn45> I'd underline 2 separate issues here. First: a user can write more data than the share capacity. How big is this overhead? For small shares (1 GB, for instance) it can be around half of the share, in other words 0.5 GB, before hitting an error. For bigger shares this limit is smaller. The second issue is also about capacity propagation. After changing the share capacity there is some time (up to 10 seconds) where the new limit is
      [02:33:39 PM] <fnn45> invisible to the VAST cluster. To address these two issues I introduced additional parameters. One of them is the number of additional blocks (64 MB) to write. The second one is a sleep between write operations and the extend/shrink operation to make sure changes were applied properly
      [02:34:37 PM] <gouthamr> thanks for sharing this context
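
      To make the two workarounds concrete, here is a hedged sketch of how a shrink/extend scenario test could consume such tunables. The helper and parameter names are illustrative, not the plugin's actual API.

          import time

          BLOCK_SIZE_MB = 64  # the write granularity mentioned above

          def write_until_full(ssh_client, mount_path, size_gb,
                               overflow_blocks, propagation_wait):
              """Write 64 MB blocks until the back end rejects a write.

              overflow_blocks tolerates asynchronous quota enforcement
              (roughly 0.5-1 GB of slack on small shares, per the
              discussion above); propagation_wait covers the window of up
              to ~10 seconds in which a resized limit is not yet visible
              cluster-wide.
              """
              max_blocks = size_gb * 1024 // BLOCK_SIZE_MB + overflow_blocks
              for i in range(max_blocks):
                  try:
                      ssh_client.exec_command(
                          'dd if=/dev/zero of=%s/blk%d bs=64M count=1 '
                          'oflag=direct' % (mount_path, i))
                  except Exception:
                      return  # write refused: limit enforced within tolerance
                  time.sleep(propagation_wait)
              raise AssertionError('capacity limit was never enforced after '
                                   '%d blocks' % max_blocks)

      The same propagation_wait would also be applied right after the extend/shrink API call, before the test asserts that the new size is in effect.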

Assignee: Goutham Pacha Ravi (rhn-engineering-gpachara)
Reporter: Goutham Pacha Ravi (rhn-engineering-gpachara)
Squad: rhos-dfg-storage-squad-manila