Subscription Watch / SWATCH-3001

Spike: Design replacement for hourly aggregation of billable usages


    • Type: Story
    • Resolution: Unresolved
    • Priority: Major
    • Labels: swatch-billable-usage
    • Story Points: 5

      In order to improve swatch-billable-usage's resiliency and observability, we need to replace the kstreams implementation that aggregates billable usages with a cron job. Currently, kstreams reads messages from the billable-usage topic, groups together messages received within an hour of each other, and sends out one billable-usage-hourly-aggregate message. Instead, we need a query that fetches all usages for a given billingAccountId/product/metric and builds the billable-usage-hourly-aggregate message.

      Diagram: https://drive.google.com/file/d/1myDaC74bzA8DO6QX9IF_qUdkc6pL0qQv/view

      Slack discussion: https://redhat-internal.slack.com/archives/C01F7QFNATC/p1727775760775579
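
      As a starting point for the spike, a minimal sketch of what the cron-job query could look like is below. It is illustrative only: the class, table, and column names (billable_usage, status, value, etc.) are assumptions rather than the actual swatch-billable-usage schema, and message publishing is stubbed out.

      ```java
      import java.math.BigDecimal;
      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.time.OffsetDateTime;

      public class HourlyAggregationJob {

          // Assumed table and columns; the real swatch schema may differ.
          private static final String AGGREGATE_SQL = """
                  SELECT billing_account_id, product_id, metric_id, SUM(value) AS total_value
                  FROM billable_usage
                  WHERE status = 'PENDING'
                  GROUP BY billing_account_id, product_id, metric_id
                  """;

          /** Runs once per scheduled (cron) invocation instead of per kstreams hourly window. */
          public void run(Connection connection) throws SQLException {
              OffsetDateTime batchTimestamp = OffsetDateTime.now();
              try (PreparedStatement stmt = connection.prepareStatement(AGGREGATE_SQL);
                   ResultSet rs = stmt.executeQuery()) {
                  while (rs.next()) {
                      // One billable-usage-hourly-aggregate message per billingAccountId/product/metric group.
                      sendHourlyAggregate(
                              rs.getString("billing_account_id"),
                              rs.getString("product_id"),
                              rs.getString("metric_id"),
                              rs.getBigDecimal("total_value"),
                              batchTimestamp);
                  }
              }
          }

          private void sendHourlyAggregate(String billingAccountId, String productId, String metricId,
                                           BigDecimal totalValue, OffsetDateTime batchTimestamp) {
              // Placeholder: the real job would publish to the billable-usage-hourly-aggregate topic here.
          }
      }
      ```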

      Things to consider:

      • How will we make sure each aggregate has a unique key and timestamp? (A batch_timestamp field was discussed.)
      • New columns or a new table for tracking aggregation status and timestamps (see the sketch after this list).
      • What happens if swatch-billable-usage fails after the hourly-aggregate message is sent but before the database is updated? (Duplicate messages should not be processed by the producers.)
      • What happens if the hourly-aggregate message gets sent and the azure/aws producers fail during processing? (The status is not updated, but a usage was already sent to the marketplace.)
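
      To make the failure scenarios above concrete, here is a hedged sketch of the status-tracking idea. The column and status names (status, batch_timestamp) and the exact transaction boundaries are assumptions for discussion, not the decided design: the intent is that retrying after a partial failure re-emits the same aggregate key instead of minting a new one, so producers can deduplicate.

      ```java
      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;
      import java.time.OffsetDateTime;

      public class AggregationStatusTracker {

          // Assumed columns: status and batch_timestamp on the billable_usage table.
          private static final String CLAIM_BATCH_SQL = """
                  UPDATE billable_usage
                  SET status = 'AGGREGATED', batch_timestamp = ?
                  WHERE status = 'PENDING'
                    AND billing_account_id = ? AND product_id = ? AND metric_id = ?
                  """;

          /**
           * Stamps the rows backing one hourly aggregate. Run in the same transaction as the
           * aggregate send; producers can then treat (billingAccountId, product, metric,
           * batchTimestamp) as an idempotency key and skip duplicates if the message is
           * re-sent after a partial failure.
           */
          public int claimBatch(Connection connection, OffsetDateTime batchTimestamp,
                                String billingAccountId, String productId, String metricId)
                  throws SQLException {
              try (PreparedStatement stmt = connection.prepareStatement(CLAIM_BATCH_SQL)) {
                  stmt.setObject(1, batchTimestamp);
                  stmt.setString(2, billingAccountId);
                  stmt.setString(3, productId);
                  stmt.setString(4, metricId);
                  return stmt.executeUpdate();
              }
          }
      }
      ```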

      Done:

      • Stories created
      • Documentation of new plan

              Assignee: Unassigned
              Reporter: Kevin Flaherty (kflahert@redhat.com)