Type: Task
Resolution: Unresolved
Priority: Normal
Labels: subs-swatch-lightning
Entrance Criteria:
Determine what our database storage currently costs, or at least a per-unit rate we can project from. Context: https://redhat-internal.slack.com/archives/C022YV4E0NA/p1765911336091749
Task Objective
Conduct storage capacity research to populate a dedicated section of the epic's research/design document with data storage impact estimates for implementing "Always On" capacity tracking.
Research Focus
Estimate the increase in data storage (and associated costs) required to transition from the current opt-in model to "Always On" (100% opt-in equivalent) for capacity-related data.
Technical Approach
• Analyze current org_config table (opted-in orgs) vs. total org population
• Correlate to capacity-related table volumes: subscriptions, contracts, subscription_measurements, billable_usage_remittance, etc.
• Assess data sources: IT Services (Partner Gateway & Subscription Service)
• Estimate storage scaling requirements for always-on processing (see the volume-scaling sketch after this list)
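As a starting point for the analysis above, here is a minimal Python sketch of the scaling math, assuming storage grows roughly linearly with the number of orgs. The org counts, row counts, and average row sizes are placeholder assumptions to be replaced with figures queried from the production database; only the table names come from the list above.

```python
# Rough "Always On" volume projection: scale today's capacity-table
# footprint (opted-in orgs only) up to the full org population.
# All numeric values are placeholders, not measured figures.

OPTED_IN_ORGS = 2_500    # assumption: rows in org_config today
TOTAL_ORGS = 50_000      # assumption: total org population (IT Services)

# assumption: (current row count, average row size in bytes) per table
current_tables = {
    "subscriptions": (4_000_000, 350),
    "contracts": (120_000, 500),
    "subscription_measurements": (9_000_000, 80),
    "billable_usage_remittance": (2_000_000, 200),
}

scale = TOTAL_ORGS / OPTED_IN_ORGS  # naive linear scaling by org count

print(f"{'table':<28}{'current GB':>12}{'projected GB':>14}")
total_now = total_projected = 0.0
for table, (rows, row_bytes) in current_tables.items():
    current_gb = rows * row_bytes / 1e9
    projected_gb = current_gb * scale
    total_now += current_gb
    total_projected += projected_gb
    print(f"{table:<28}{current_gb:>12.2f}{projected_gb:>14.2f}")
print(f"{'TOTAL':<28}{total_now:>12.2f}{total_projected:>14.2f}")
```

Linear scaling by org count is the simplest defensible assumption; if row volume per org differs materially between opted-in orgs and the rest of the population, the scale factor should be refined with a per-org size distribution.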
Key Definition
"Always On" = Auto opt-in ALL orgs when SWATCH receives data (vs. current filtering by org_config)
Scope: Capacity-related tables only (not usage/tally tables)
Deliverable
Create a "Storage Capacity Impact" section of SWATCH-4367 epic document with:
• Current vs. projected data volume estimates (by table and data source)
• Database storage cost impact projections (see the cost sketch after this list)
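To keep the cost projection concrete, a small worked sketch follows. The per-GB-month price and overhead multiplier are placeholder assumptions, not actual managed-database pricing, and the GB figures are meant to come from the volume estimate above.

```python
# Cost-projection sketch: convert projected storage into a monthly cost.
# Unit price and overhead multiplier are placeholder assumptions.

CURRENT_GB = 2.1            # assumption: output of the volume sketch above
PROJECTED_GB = 42.0         # assumption: output of the volume sketch above
PRICE_PER_GB_MONTH = 0.115  # assumption: $/GB-month of provisioned storage
OVERHEAD_FACTOR = 2.0       # assumption: indexes, WAL, replicas, backups

def monthly_cost(gb: float) -> float:
    """Monthly storage cost including the overhead multiplier."""
    return gb * OVERHEAD_FACTOR * PRICE_PER_GB_MONTH

delta = monthly_cost(PROJECTED_GB) - monthly_cost(CURRENT_GB)
print(f"current:   ${monthly_cost(CURRENT_GB):,.2f}/month")
print(f"always-on: ${monthly_cost(PROJECTED_GB):,.2f}/month")
print(f"delta:     ${delta:,.2f}/month")
```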
Acceptance Criteria
• The document contains quantified storage impact estimates (current vs. always-on)
• The document includes database storage cost projections for the increased data volume
Issue links:
• is cloned by: SWATCH-4403 High Level estimation of data storage impact of "always on" usage and host data (New)
• relates to: SWATCH-3673 Decentralize DB & Enforce Service Data Ownership (Backlog)