Task
Resolution: Done
Major
ANSTRAT-137 - Insights reports for Partners (metric collection)
Sprint ending Aug 17, 2023, Sprint ending Sept 14, 2023, Sprint ending Nov 9, 2023
Analytics Data Exporter service for Hub data.
It is similar to the Data Exporter for Controller data.
There are several architecture options for how to design the services and the data flow from Processor to Exporter.
Current Controller flow:
Processor
- checks DB tenants.messages
- imports data
- updates DB tenants.tenant (refresh_rollup=True)
- updates DB tenant.rollup_job (status=new)
Rollup:
- checks DB tenants.tenant refresh_rollup in a loop
- updates DB tenant.rollup_job (status: new -> running)
- does the rollup work
- updates DB tenant.rollup_job (status: running -> finished)
- updates DB tenants.tenant (refresh_rollup = False)
- produces Kafka message (Data Export Kafka)
Data Exporter:
- checks Kafka messages (gets tenant_id)
- gets DB tenant.rollup_job (status: finished)
- exports data
- updates DB tenants.rollup_job (status: finished -> data_export_finished)
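The Controller handshake above can be sketched as code. This is a minimal toy model using an in-memory SQLite DB as a stand-in for the real tenants schema; table and column names follow the flow notes, and the real services, scheduling, and the Data Export Kafka message are out of scope:

```python
import sqlite3

# In-memory stand-in for the real DB; schema names follow the flow above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tenant (tenant_id TEXT PRIMARY KEY, refresh_rollup INTEGER)")
db.execute("CREATE TABLE rollup_job (tenant_id TEXT, status TEXT)")

def processor_import(tenant_id):
    # Processor: import data, then flag the tenant and enqueue a rollup job.
    db.execute("INSERT OR REPLACE INTO tenant VALUES (?, 1)", (tenant_id,))
    db.execute("INSERT INTO rollup_job VALUES (?, 'new')", (tenant_id,))

def rollup_pass():
    # Rollup: pick up flagged tenants, do the work, clear the flag.
    # The real worker would also produce a Data Export Kafka message here.
    for (tenant_id,) in db.execute(
            "SELECT tenant_id FROM tenant WHERE refresh_rollup = 1").fetchall():
        db.execute("UPDATE rollup_job SET status = 'running' WHERE tenant_id = ?",
                   (tenant_id,))
        # ... rollup work happens here ...
        db.execute("UPDATE rollup_job SET status = 'finished' WHERE tenant_id = ?",
                   (tenant_id,))
        db.execute("UPDATE tenant SET refresh_rollup = 0 WHERE tenant_id = ?",
                   (tenant_id,))

def exporter_pass(tenant_id):
    # Data Exporter: on a Kafka message, export finished jobs and mark them.
    db.execute("""UPDATE rollup_job SET status = 'data_export_finished'
                  WHERE tenant_id = ? AND status = 'finished'""", (tenant_id,))

processor_import("t1")
rollup_pass()
exporter_pass("t1")
print(db.execute("SELECT status FROM rollup_job WHERE tenant_id = 't1'")
        .fetchone()[0])  # -> data_export_finished
```

The point of the handshake is that each stage only advances jobs left in the state the previous stage set, so the services stay decoupled.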
—
Processor Hub flow:
Processor Hub doesn't run a Rollup, so with the same architecture there is no service that can notify Data Exporter through Kafka.
There are options a) through g):
https://miro.com/app/board/o9J_l2qb64U=/?moveToWidget=3458764563373964224&cot=14
Rollups:
a) shared rollups
- although Hub doesn't need rollups, it is easy to implement and follows the current architecture
b) two rollups
- more configurable than a)
- more clean design than a)
- requires more CPU and RAM than a)
- requires extension of Grafana and Alerts
c) no rollups for hub
- doesn't run a rollups worker that Hub doesn't actually need
- makes the processors behave differently, which makes the architecture harder to understand
- needs a rollups implementation inside the processor - not a good design
- combined with d), it needs a Kafka producer (not available now)
Notification delivery to Data Exporter
d) Kafka
- the current sync between Controller Rollups -> Exporter
- may require an extension to carry the "source" of the data in the Kafka message
- harder to implement with c)
- requires new kafka consumer group with g)
e) DB tenants.tenants
- skips Kafka and uses the same DB mechanism for Rollup -> Exporter (and ETL) as between Processor -> Rollups
- better, unified, cross-services design
- compatible with c) and g)
- more work
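Option e) can be sketched with the same toy DB model: Processor Hub raises a flag column that the Exporter polls in a loop, mirroring the existing Processor -> Rollups handshake. The column name `refresh_export` is an assumption for illustration, not the actual schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Hypothetical flag column "refresh_export" in tenants.tenants.
db.execute("CREATE TABLE tenants (tenant_id TEXT PRIMARY KEY, refresh_export INTEGER)")

def processor_hub_import(tenant_id):
    # Hub import finishes: raise the export flag instead of producing to Kafka.
    db.execute("INSERT OR REPLACE INTO tenants VALUES (?, 1)", (tenant_id,))

def exporter_loop_pass():
    # Exporter polls the flag in a loop (the same pattern Rollup already uses).
    exported = []
    for (tenant_id,) in db.execute(
            "SELECT tenant_id FROM tenants WHERE refresh_export = 1").fetchall():
        exported.append(tenant_id)  # ... export work here ...
        db.execute("UPDATE tenants SET refresh_export = 0 WHERE tenant_id = ?",
                   (tenant_id,))
    return exported

processor_hub_import("t1")
print(exporter_loop_pass())  # -> ['t1']
print(exporter_loop_pass())  # -> []
```

Clearing the flag after export makes each poll idempotent, which is what lets both Controller and Hub share one notification mechanism without Kafka.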
Data Exporter
f) shared exporter
- easier to implement
- compatible with d)
- less configurable
- worse design
g) two exporters
- clean architecture
- requires more CPU and RAM
- more configurable
- requires extension of grafana and alerts
- requires a new Kafka consumer group with d)
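Why g) combined with d) needs a new consumer group: consumers in the same Kafka group split a topic's messages between them, while consumers in different groups each receive every message. A toy in-memory model of that delivery rule (group and topic names are illustrative; real code would use a Kafka client library):

```python
from collections import defaultdict

class Topic:
    """Toy Kafka topic: each message is delivered once per consumer group."""
    def __init__(self):
        self.groups = defaultdict(list)   # group_id -> list of consumer inboxes
        self.rr = defaultdict(int)        # round-robin cursor per group
    def subscribe(self, group_id):
        inbox = []
        self.groups[group_id].append(inbox)
        return inbox
    def publish(self, msg):
        for group_id, inboxes in self.groups.items():
            # Within a group, only one member gets the message (round-robin).
            inboxes[self.rr[group_id] % len(inboxes)].append(msg)
            self.rr[group_id] += 1

topic = Topic()
controller_exporter = topic.subscribe("exporter-controller")  # existing group
hub_exporter = topic.subscribe("exporter-hub")                # new group per g)
topic.publish({"tenant_id": "t1", "source": "hub"})

# Both exporters see the message because they are in different groups:
print(len(controller_exporter), len(hub_exporter))  # -> 1 1
```

If both exporters shared one group, each Data Export message would reach only one of them, so two independent exporters require the second group.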
Related to: AA-1876 [AA-1783] AA Data Exporter: Switch Kafka to Loop (Closed)