Red Hat OpenStack Services on OpenShift / OSPRH-9202

Metering group metadata should be included in a label of ceilometer compute metrics

    • Include metering group into ceilometer compute metrics
    • OBSDA-880 - Enhance Metrics Collected by RHOSO for FP1
    • telemetry-operator-container-1.0.4-4
    • 0% To Do, 29% In Progress, 71% Done
    • .Autoscaling improvements

      Autoscaling has been updated to use the server_group metadata. This improves the stability of the autoscaling feature.
      For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html-single/autoscaling_for_instances/index[Autoscaling for instances].
    • Enhancement
    • Done

      Epic Overview

      In gnocchi-based autoscaling, the metering.server_group instance metadata was used to group together instances belonging to the same stack. Unfortunately, this metadata isn't known to Prometheus (and thus to Aodh or Heat) in RHOSO, which leaves only one way to group instances together: instance names. Grouping instances by name is very sensitive to configuration issues. Autoscaling stacks must be configured so that their instances have predictable names (or predictable parts of their names), while all other instances (standalone instances or instances from other stacks) must be named differently. If either condition isn't met, autoscaling won't work correctly.

      In this epic we should try to re-enable the use of metering.server_group for autoscaling by collecting this instance metadata with Ceilometer and storing it in Prometheus as a label. A sketch of how instances get tagged with this metadata follows.
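
      For illustration, this is roughly how the upstream gnocchi-based autoscaling templates tag instances with metering.server_group; a minimal Heat sketch, where the image, flavor, and network values are placeholders:

          heat_template_version: wallaby

          resources:
            asg:
              type: OS::Heat::AutoScalingGroup
              properties:
                min_size: 1
                max_size: 3
                resource:
                  type: OS::Nova::Server
                  properties:
                    image: fedora-40        # placeholder image
                    flavor: m1.small        # placeholder flavor
                    networks:
                      - network: private    # placeholder network
                    metadata:
                      # Tag every instance in the group with the stack ID so
                      # the telemetry services can group them together.
                      "metering.server_group": {get_param: "OS::stack_id"}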

      Goals

      As a user of the Autoscaling feature, I'll be able to use the metering.server_group instance metadata for instance grouping instead of error-prone instance name regexes.

      Requirements

      A list of specific needs or objectives that an Epic must deliver to satisfy the Feature. Some requirements will be flagged as MVP. If an MVP requirement gets shifted, the epic shifts. If a non-MVP requirement slips, it does not shift the epic.

      Requirement                                                         Notes   Is MVP?
      metering instance metadata collected and published by ceilometer            yes
      the metadata received and exposed by sg-core                                yes
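
      To make the two requirements concrete, here is a hypothetical example of the time series that sg-core could end up exposing to Prometheus; the metric name follows the existing ceilometer_* convention, and the server_group label name is an assumption to be settled in this epic:

          # Assumed outcome: the metering.server_group value surfaces as a
          # label on the instance's compute metrics.
          ceilometer_cpu{resource="<instance-uuid>", server_group="<stack-id>"} 4.2e+10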

      Customer Considerations

      The current method of instance grouping for autoscaling will continue to work. Customers will be able to choose whether to keep using name regexes to group instances or to switch to using metering.server_group.

      Documentation Considerations

      This will include an update to the autoscaling guide. The current autoscaling guide tells customers how to create stacks that result in instances with similar names, and then how to use those names in autoscaling alarm queries.

      After the epic is implemented, we will be able to simplify the documentation: customers won't need to care about how their autoscaled instances get named. They won't need to carefully construct queries; they'll just configure the metering.server_group metadata and use it as a label in their queries.

      This will result in small changes to the text, and the Heat templates will need to be adjusted, roughly along the lines of the sketch below.
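
      A rough sketch of what an adjusted alarm definition might look like, assuming the OS::Aodh::PrometheusAlarm resource type used in the current autoscaling guide, a ceilometer_cpu metric, and an assumed server_group label name (the final label name is an implementation detail of this epic):

          resources:
            cpu_alarm_high:
              type: OS::Aodh::PrometheusAlarm
              properties:
                description: Scale up if average CPU usage is above the threshold
                # Today: group instances with a name regex, which breaks when
                # instance names don't follow the expected pattern, e.g.:
                #   rate(ceilometer_cpu{resource_name=~'my_asg-.*'}[5m])
                # With this epic: group instances by the assumed server_group
                # label instead of relying on naming conventions.
                query:
                  str_replace:
                    template: rate(ceilometer_cpu{server_group='stack_id'}[5m])
                    params:
                      stack_id: {get_param: "OS::stack_id"}
                alarm_actions:
                  # scaleup_policy is a hypothetical OS::Heat::ScalingPolicy
                  # resource defined elsewhere in the template.
                  - {get_attr: [scaleup_policy, signal_url]}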

              rh-ee-jwysogla Jaromir Wysoglad
              rhos-dfg-cloudops