Type: Story
Resolution: Unresolved
Priority: Major
Work Type: Product / Portfolio Work
Story Points: 5
Flagged: Impediment
Today, when notifications-engine processes a Kafka message from the platform.notifications.ingress topic, it first checks whether the message ID (taken from a payload field or from the rh-message-id Kafka header) was already processed by any of the notifications-engine pods. To make that check possible, all message IDs are stored for 24 hours in the Notifications DB and eventually purged by a nightly cronjob. When the message ID is already known, notifications-engine simply ignores the Kafka message.
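The ID lookup described above (header first, then payload field) might be sketched roughly as follows. This is an illustrative stand-in, not the actual notifications-engine code; the class name, the `"id"` payload key, and the method signatures are all assumptions.

```java
import java.util.Map;
import java.util.Optional;

// Illustrative sketch: resolve the deduplication key for an incoming Kafka
// message, preferring the rh-message-id header over the payload field.
// All names here are hypothetical.
public class MessageIdResolver {

    static final String MESSAGE_ID_HEADER = "rh-message-id";

    public static Optional<String> resolve(Map<String, String> kafkaHeaders,
                                           Map<String, Object> payload) {
        String fromHeader = kafkaHeaders.get(MESSAGE_ID_HEADER);
        if (fromHeader != null && !fromHeader.isBlank()) {
            return Optional.of(fromHeader);
        }
        // Fall back to the payload field ("id" is assumed here).
        return Optional.ofNullable(payload.get("id")).map(Object::toString);
    }
}
```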
To reduce the IOPS pressure on the Notifications DB, we need to move the message IDs from Postgres to a remote cache and stop running a DB query every time we need to determine whether a Kafka message is a duplicate.
Acceptance criteria:
- Switching from the DB to a remote cache to detect Kafka duplicates is controlled with a feature flag declared in Unleash.
- The deduplication logic is updated to rely on a remote cache rather than a PostgreSQL table.
- The cronjob that purges the data from the DB is eventually removed.
To ensure the necessary functionality (Slack thread), this should be implemented using the Vert.x Mutiny API, as described in the Quarkus reference guide.
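The acceptance criteria above could translate into a flag-gated dedup path like the following sketch. The Unleash check and the remote cache are represented by simple stand-ins (a `BooleanSupplier` and a `ConcurrentHashMap`) so the logic is runnable in isolation; in the real implementation the cache call would be an atomic Redis `SET <key> <value> NX EX <ttl>` issued through the Vert.x Mutiny client, and every name here is hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BooleanSupplier;
import java.util.function.Predicate;

// Illustrative sketch only: a feature-flag-gated switch between the legacy
// Postgres lookup and a cache-based "set if absent" check. With Redis, the
// cache branch would map to SET <messageId> "" NX EX 86400, where a null
// reply means the key already existed (i.e. the message is a duplicate).
public class DeduplicationSwitch {

    private final BooleanSupplier cacheDedupEnabled; // stands in for the Unleash flag
    private final Predicate<String> existsInDb;      // stands in for the Postgres query
    private final ConcurrentHashMap<String, Boolean> cache = new ConcurrentHashMap<>();

    public DeduplicationSwitch(BooleanSupplier cacheDedupEnabled,
                               Predicate<String> existsInDb) {
        this.cacheDedupEnabled = cacheDedupEnabled;
        this.existsInDb = existsInDb;
    }

    /** Returns true when the Kafka message was already processed. */
    public boolean isDuplicate(String messageId) {
        if (cacheDedupEnabled.getAsBoolean()) {
            // putIfAbsent is the in-memory analogue of SET ... NX:
            // a non-null previous value means the ID was already recorded.
            return cache.putIfAbsent(messageId, Boolean.TRUE) != null;
        }
        return existsInDb.test(messageId);
    }
}
```

A single atomic "set if absent with TTL" both records the ID and answers the duplicate question, which is what removes the separate DB query (and, eventually, the purge cronjob, since the TTL handles expiry).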
Is blocked by: RHCLOUD-36096 [notifications-engine] Enable remote caching in AWS ElastiCache (Closed)